    POLICY PROCESSES SUPPORT THROUGH INTEROPERABILITY WITH SOCIAL MEDIA

    Governments of many countries attempt to increase public participation by exploiting the capabilities and high penetration of the Internet. To this end they make considerable investments in constructing and operating e-participation websites; however, their use has generally been limited and below expectations. To widen e-participation, governments should therefore also investigate exploiting the numerous user-driven Web 2.0 social media, which have proved highly successful in attracting large numbers of users. This paper describes a methodology for the exploitation of Web 2.0 social media by government organizations in public policy formulation processes, through a central platform-toolset that provides interoperability with many different social media and enables posting and retrieving content from them in a systematic, centrally managed and machine-supported automated manner (through their application programming interfaces (APIs)). The proposed methodology uses ‘Policy Gadgets’ (Padgets), defined as micro web applications that present policy messages in various popular Web 2.0 social media (e.g. social networks, blogs, forums, news sites) and collect users’ interactions with them (e.g. views, comments, ratings, votes). These interaction data can be used as input to policy simulation models that estimate the impact of various policy options. The conclusions from an analysis of the APIs of ten highly popular social media are encouraging: the APIs provide extensive capabilities for publishing content (e.g. data, images, video, links) and for retrieving relevant user activity and content (e.g. views, comments, ratings, votes), though their continuous evolution may pose significant difficulties and challenges.
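
    As an illustration of the posting/retrieval pattern described above, the minimal Python sketch below shows how a central platform might publish a policy message and pull back interaction data through a platform API. The endpoint paths, token handling, and field names are hypothetical stand-ins, not the API of any real social medium.

        # Minimal sketch of the Padgets posting/retrieval loop, assuming a
        # hypothetical REST-style social media API; endpoints, token, and
        # field names are illustrative, not those of any real platform.
        import requests

        API_BASE = "https://api.example-social.com/v1"  # hypothetical endpoint
        TOKEN = "platform-issued-oauth-token"           # placeholder credential

        def publish_policy_message(text: str) -> str:
            """Post a policy message and return the platform's post id."""
            resp = requests.post(
                f"{API_BASE}/posts",
                headers={"Authorization": f"Bearer {TOKEN}"},
                json={"text": text},
            )
            resp.raise_for_status()
            return resp.json()["id"]

        def collect_interactions(post_id: str) -> dict:
            """Retrieve user activity (views, comments, ratings, votes) for a post."""
            resp = requests.get(
                f"{API_BASE}/posts/{post_id}/metrics",
                headers={"Authorization": f"Bearer {TOKEN}"},
            )
            resp.raise_for_status()
            return resp.json()

        post_id = publish_policy_message("Draft policy X: please comment and rate.")
        metrics = collect_interactions(post_id)  # input for policy simulation models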

    Enabling Semantic Interoperability in e-Government: A System-based Methodological Framework for XML Schema Management at National Level

    Articulating semantic interoperability in e-Government remains in question as long as international standardization efforts do not reach a consensus on how to semantically annotate and exchange data, but merely focus on syntactic aspects by publishing sets of XML Schemas. As one-stop governmental services at national and cross-country level become an imperative, the need emerges for standardized data definitions, codification of existing unstructured information, and a framework for managing governmental data in a unified way. This paper proposes a methodology, effectively applied to the Greek e-Government National Interoperability Framework, for designing semantically enriched XML Schemas, with which homogenized governmental information complies, based on the UN/CEFACT Core Components Technical Specification (CCTS). A discussion of a prospective architecture for managing large sets of XML Schemas is also motivated, in order to identify the necessary components and the key issues that need to be tackled when designing a Governmental Schema Registry.
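
    One routine task of such a Governmental Schema Registry is checking that submitted documents conform to the registered, CCTS-based XML Schemas. The Python sketch below illustrates this step using the lxml library; the file names and helper function are illustrative assumptions, not components of the paper's framework.

        # Hedged sketch: validate a governmental XML document against a
        # registered XML Schema. File names are illustrative; lxml is a
        # general-purpose XML library, not part of the proposed registry.
        from lxml import etree

        def validate_against_registry_schema(doc_path: str, xsd_path: str) -> bool:
            """Return True if the document conforms to the registered schema."""
            schema = etree.XMLSchema(etree.parse(xsd_path))
            document = etree.parse(doc_path)
            ok = schema.validate(document)
            if not ok:
                for error in schema.error_log:  # report line-level violations
                    print(error.line, error.message)
            return ok

        # e.g. validate_against_registry_schema("citizen_record.xml", "CoreComponents.xsd")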

    Empowering Civic Participation in the Policy Making Process through Social Media

    The Web as a medium has high significance in the everyday life of the digital society. The growing visibility of social media holds considerable potential for more citizen-centric and socially rooted policy making. Realizing this potential calls for novel tools capable of analyzing society's input and predicting the possible impact of policies. This paper describes a prototype tool set for policy makers that utilizes social media technologies and methods to empower public engagement, enable cross-media platform publishing and feedback tracking/analysis, and provide decision support.
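
    As a rough illustration of the feedback tracking/analysis step, the Python sketch below collapses per-platform interaction counts into a single engagement figure for decision support. The platform names, weights, and scoring rule are invented for illustration; the paper does not specify them.

        # Toy decision-support indicator: weight and sum interaction counts
        # gathered from several platforms. All weights are assumed values.
        from typing import Dict

        def engagement_score(metrics: Dict[str, Dict[str, int]]) -> float:
            """Collapse per-platform interaction counts into one figure."""
            weights = {"views": 0.1, "comments": 1.0, "votes": 0.5}
            return sum(
                weights.get(kind, 0.0) * count
                for platform_metrics in metrics.values()
                for kind, count in platform_metrics.items()
            )

        feedback = {
            "blog":    {"views": 1200, "comments": 34, "votes": 210},
            "network": {"views": 5400, "comments": 87, "votes": 932},
        }
        print(engagement_score(feedback))  # single indicator for policy makers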

    Nearest-neighbor Queries in Probabilistic Graphs

    Large probabilistic graphs arise in various domains, spanning from social networks to biological and communication networks. An important query on these graphs is the k nearest-neighbor query, which involves finding and reporting the k closest nodes to a specific node. This query assumes the existence of a measure of the “proximity” or the “distance” between any two nodes in the graph. To that end, we propose various novel distance functions that extend well-known notions of classical graph theory, such as shortest paths and random walks. We argue that many meaningful distance functions are computationally intractable to compute exactly. Thus, in order to process nearest-neighbor queries, we resort to Monte Carlo sampling and exploit novel graph-transformation ideas and pruning opportunities. In our extensive experimental analysis, we explore the trade-offs of our approximation algorithms and demonstrate that they scale well on real-world probabilistic graphs with tens of millions of edges.
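
    The core Monte Carlo idea is to sample 'possible worlds' of the probabilistic graph, where each edge survives independently with its probability, compute distances in each sampled world, and average. The Python sketch below estimates expected hop distance this way; the example graph, disconnection penalty, and sample count are illustrative choices, and the paper's actual algorithms add graph transformations and pruning on top of this baseline.

        # Monte Carlo estimate of the k nearest neighbors in a probabilistic
        # graph: each edge exists independently with its given probability.
        import random
        from collections import deque

        def sample_world(edges):
            """Keep each edge independently with its probability."""
            return [(u, v) for u, v, p in edges if random.random() < p]

        def bfs_distances(nodes, kept_edges, source):
            """Hop distances from source in one sampled (deterministic) world."""
            adj = {n: [] for n in nodes}
            for u, v in kept_edges:
                adj[u].append(v)
                adj[v].append(u)
            dist, queue = {source: 0}, deque([source])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        def knn(nodes, edges, source, k, samples=1000):
            penalty = len(nodes)  # assumed cost when a node is unreachable
            total = {n: 0.0 for n in nodes if n != source}
            for _ in range(samples):
                dist = bfs_distances(nodes, sample_world(edges), source)
                for n in total:
                    total[n] += dist.get(n, penalty)
            expected = {n: s / samples for n, s in total.items()}
            return sorted(expected, key=expected.get)[:k]

        nodes = ["a", "b", "c", "d"]
        edges = [("a", "b", 0.9), ("b", "c", 0.5), ("a", "d", 0.2), ("c", "d", 0.8)]
        print(knn(nodes, edges, "a", k=2))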